Appendix A: The Persistence Interaction Detection Algorithm

Neural Information Processing Systems

Algorithm 1: the proposed Persistence Interaction Detection (PID) algorithm. Input: a trained feed-forward neural network, target layer l, norm p. Output: a ranked list of interaction candidates.

Our PID framework is presented in Algorithm 1. We use PID with η set to 0 in all experiments of this paper. In this subsection, we prove Theorem 1 and evaluate it empirically. We first establish a bound on the birth time (Corollary 1); it is straightforward to show that Corollary 1 extends to the death time, so the analogous bound holds there as well. Combining the two bounds finishes the proof. After proving Corollary 1, we return to the proof of the theorem. In this section, we also show how to extend PID to CNNs.
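The connectivity idea behind the algorithm can be illustrated with a minimal sketch. This is not the paper's actual PID implementation (which uses the full persistent-homology machinery across layers); it is an assumed simplification restricted to the first layer, where a 0-dimensional filtration is swept over edges sorted by weight magnitude, and a feature pair is scored by the threshold at which the two features first join the same connected component through a shared hidden unit. The function name and the merge-threshold scoring are illustrative assumptions.

```python
import numpy as np

def interaction_persistence(W):
    """Illustrative sketch (assumption, not the paper's algorithm).

    W: (hidden, features) first-layer weight matrix. Treat |W[h, i]| as the
    strength of edge (hidden unit h, feature i) and sweep a threshold downward,
    i.e., a 0-dimensional persistence filtration over the bipartite graph.
    A feature pair's score is the largest threshold at which the two features
    fall into the same connected component, reflecting that interacting
    features must share strongly weighted connections to common hidden units.
    """
    H, F = W.shape
    parent = list(range(H + F))  # union-find over hidden + feature nodes

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Edges sorted by descending magnitude define the filtration order.
    edges = sorted(((abs(W[h, i]), h, H + i)
                    for h in range(H) for i in range(F)), reverse=True)
    scores = {}
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            # Every feature pair newly joined by this edge gets the current
            # threshold as its "birth" strength.
            feats_a = [i for i in range(F) if find(H + i) == ra]
            feats_b = [i for i in range(F) if find(H + i) == rb]
            for i in feats_a:
                for j in feats_b:
                    scores[tuple(sorted((i, j)))] = float(w)
            parent[ra] = rb
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

For example, with two hidden units where features 0 and 1 share a strongly weighted unit and feature 2 is only weakly linked to them, the pair (0, 1) is ranked first.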



Towards Interaction Detection Using Topological Analysis on Neural Networks

Liu, Zirui, Song, Qingquan, Zhou, Kaixiong, Wang, Ting-Hsiang, Shan, Ying, Hu, Xia

arXiv.org Machine Learning

Detecting statistical interactions between input features is a crucial and challenging task. Recent advances demonstrate that it is possible to extract learned interactions from trained neural networks. It has also been observed that, in neural networks, any interacting features must follow a strongly weighted connection to common hidden units. Motivated by this observation, in this paper we propose to investigate the interaction detection problem from a novel topological perspective by analyzing the connectivity in neural networks. Specifically, we propose a new measure for quantifying interaction strength, based upon the well-established theory of persistent homology. Based on this measure, a Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions. Our proposed algorithm is evaluated across a number of interaction detection tasks on several synthetic and real-world datasets with different hyperparameters. Experimental results validate that the PID algorithm outperforms the state-of-the-art baselines.


Detecting Statistical Interactions from Neural Network Weights

Tsang, Michael, Cheng, Dehua, Liu, Yan

arXiv.org Machine Learning

Interpreting neural networks is a crucial and challenging task in machine learning. In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. Depending on the desired interactions, our method can achieve significantly better or similar interaction detection performance compared to the state-of-the-art without searching an exponential solution space of possible interactions. We obtain this accuracy and efficiency by observing that interactions between input features are created by the non-additive effect of nonlinear activation functions, and that interacting paths are encoded in weight matrices. We demonstrate the performance of our method and the importance of discovered interactions via experimental results on both synthetic datasets and real-world application datasets.
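The key observation in this abstract, that interactions arise at hidden units with strong connections to multiple inputs, can be sketched for the simplest case. This is a hedged illustration assuming a single hidden layer (the paper itself handles deeper networks by aggregating path strengths through the weight matrices); the function name and the min-weight scoring rule used here are assumptions chosen to mirror the stated intuition, not a faithful reproduction of the authors' method.

```python
import numpy as np

def pairwise_interaction_strengths(W1, w_out):
    """Illustrative sketch (assumption): score pairwise interactions in a
    one-hidden-layer network.

    W1: (hidden, features) first-layer weights; w_out: (hidden,) output weights.
    Because interacting features must share strongly weighted connections to a
    common hidden unit, a pair (i, j) is scored by summing, over hidden units,
    the unit's outgoing strength times the weaker of its two incoming weights.
    """
    A = np.abs(W1)          # incoming edge strengths
    z = np.abs(w_out)       # outgoing influence of each hidden unit
    F = A.shape[1]
    scores = {}
    for i in range(F):
        for j in range(i + 1, F):
            # A hidden unit only contributes if BOTH features reach it strongly,
            # hence the elementwise minimum.
            scores[(i, j)] = float(np.sum(z * np.minimum(A[:, i], A[:, j])))
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

In a toy network where one hidden unit is strongly connected to features 0 and 1 while another unit sees only feature 2, the pair (0, 1) dominates the ranking, without searching the exponential space of candidate interactions.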